
    On the use of rational-function fitting methods for the solution of 2D Laplace boundary-value problems

    A computational scheme for solving 2D Laplace boundary-value problems using rational functions as the basis functions is described. The scheme belongs to the class of desingularized methods, for which the location of singularities and testing points is a major issue; the proposed scheme addresses this issue in the context of the 2D Laplace equation. Well-established rational-function fitting techniques are used to set the poles, while the residues are determined by enforcing the boundary conditions in the least-squares sense at the nodes of rational Gauss-Chebyshev quadrature rules. Numerical results show that errors approaching machine epsilon can be obtained for sharp and almost-sharp corners, nearly-touching boundaries, and almost-singular boundary data. We show various examples of these cases in which the method yields compact solutions, requiring fewer basis functions than the Nyström method for the same accuracy. A scheme for solving fairly large-scale problems is also presented.
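    The residue-fitting step described above can be illustrated with a toy sketch. This is not the paper's scheme: the rational-function pole placement and Gauss-Chebyshev testing nodes are replaced here by hand-picked real poles and uniform boundary samples on the unit disk, and the least-squares system is solved via the normal equations in pure Python.

```python
import cmath
import math

def fit_residues(poles, boundary_pts, boundary_vals):
    """Least-squares residues c_k so that u(z) = Re(sum_k c_k / (z - p_k))
    matches the Dirichlet boundary data at the sample points."""
    m, n = len(boundary_pts), len(poles)
    A = [[(1.0 / (z - p)).real for p in poles] for z in boundary_pts]
    # Normal equations A^T A c = A^T b (fine for a small, well-conditioned toy)
    AtA = [[sum(A[i][j] * A[i][k] for i in range(m)) for k in range(n)]
           for j in range(n)]
    Atb = [sum(A[i][j] * boundary_vals[i] for i in range(m)) for j in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Atb[col], Atb[piv] = Atb[piv], Atb[col]
        for r in range(col + 1, n):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, n):
                AtA[r][c] -= f * AtA[col][c]
            Atb[r] -= f * Atb[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (Atb[r] - sum(AtA[r][k] * c[k] for k in range(r + 1, n))) / AtA[r][r]
    return c

# Toy check: boundary of the unit disk, exact solution u(z) = Re(1/(z - 2))
pts = [cmath.exp(2j * math.pi * k / 64) for k in range(64)]
vals = [(1.0 / (z - 2.0)).real for z in pts]
poles = [1.5, 2.0, 2.5]          # all poles outside the domain
c = fit_residues(poles, pts, vals)

def u(z):
    return sum(ck * (1.0 / (z - p)).real for ck, p in zip(c, poles))

err = max(abs(u(z) - (1.0 / (z - 2.0)).real) for z in pts)
```

    Because the exact solution lies in the span of the basis (the pole at 2.0 is in the pole set), the fitted boundary error is at machine-precision level, mirroring the near-machine-epsilon accuracy the abstract reports.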

    A novel boundary element method using surface conductive absorbers for full-wave analysis of 3-D nanophotonics

    Fast surface integral equation (SIE) solvers seem to be ideal approaches for simulating 3-D nanophotonic devices, as these devices generate fields both in an interior channel and in the infinite exterior domain. However, many devices of interest, such as optical couplers, have channels that cannot be terminated without generating reflections. Generating absorbers for these channels is a new problem for SIE methods, as the methods were initially developed for problems with finite surfaces. In this paper we show that the obvious approach to eliminating reflections, making the channel mildly conductive outside the domain of interest, is inaccurate. We describe a new method in which the absorber has a gradually increasing surface conductivity; such an absorber can be easily incorporated in fast integral equation solvers. Numerical experiments from a surface-conductivity modified FFT-accelerated PMCHW-based solver are correlated with analytic results, demonstrating that this new method is orders of magnitude more effective than a volume absorber, and that the smoothness of the surface conductivity function determines the performance of the absorber. In particular, we show that the magnitude of the transition reflection is proportional to 1/L^(2d+2), where L is the absorber length and d is the order of differentiability of the surface conductivity function.
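    The stated scaling law lends itself directly to absorber sizing. A minimal sketch, assuming the geometry-dependent proportionality constant C has been measured for a given device (it is set to 1 here purely for illustration):

```python
def transition_reflection(L, d, C=1.0):
    """Transition-reflection magnitude from the scaling law |R| = C / L**(2d+2).
    C is an assumed, geometry-dependent constant; d is the differentiability
    order of the surface conductivity profile; L is the absorber length."""
    return C / L ** (2 * d + 2)

def required_length(C, d, R_target):
    """Absorber length needed to push the transition reflection below
    R_target, obtained by inverting the scaling law."""
    return (C / R_target) ** (1.0 / (2 * d + 2))

# With a C^2-smooth conductivity profile (d = 2), doubling the absorber
# length reduces the transition reflection by a factor of 2**6 = 64.
ratio = transition_reflection(1.0, 2) / transition_reflection(2.0, 2)
```

    The design consequence of the 1/L^(2d+2) law is that smoother conductivity profiles (larger d) buy accuracy far faster than longer absorbers do.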

    Evidence for increasing global wheat yield potential

    Wheat is the most widely grown food crop, with 761 Mt produced globally in 2020. To meet the expected grain demand by mid-century, wheat breeding strategies must continue to improve upon yield-advancing physiological traits, regardless of climate change impacts. Here, the best-performing doubled haploid (DH) crosses with increased canopy photosynthesis from wheat field experiments in the literature were extrapolated to the global scale with a multi-model ensemble of process-based wheat crop models to estimate global wheat production. The DH field experiments were also used to determine a quantitative relationship between wheat production and solar radiation to estimate genetic yield potential. The multi-model ensemble projected a global annual wheat production of 1050 ± 145 Mt due to the improved canopy photosynthesis, a 37% increase without expanding cropping area. Achieving this genetic yield potential would meet the lower estimate of the projected grain demand in 2050, albeit with considerable challenges.

    The chaos in calibrating crop models

    Calibration, the estimation of model parameters by fitting the model to experimental data, is among the first steps in many applications of system models and has an important impact on simulated values. Here we propose and illustrate a novel method of developing guidelines for the calibration of system models. Our example is calibration of the phenology component of crop models. The approach is based on a multi-model study in which all teams are provided with the same data and asked to return simulations for the same conditions. All teams are asked to document their calibration approach in detail, including choices with respect to the criteria for best parameters, the choice of parameters to estimate, and software. Based on an analysis of the advantages and disadvantages of the various choices, we propose calibration recommendations that cover a comprehensive list of decisions and that are based on actual practices.

    Highlights: We propose a new approach to deriving calibration recommendations for system models. The approach is based on analyzing calibration in multi-model simulation exercises. The resulting recommendations are holistic and anchored in actual practice. We apply the approach to calibration of crop models used to simulate phenology. The recommendations concern the objective function, the parameters to estimate, and the software used.

    Competing Interest Statement: The authors have declared no competing interests.

    Proposal and extensive test of a calibration protocol for crop phenology models

    A major effect of environment on crops is through crop phenology, and therefore the capacity to predict phenology for new environments is important. Mechanistic crop models are a major tool for such predictions, but calibration of crop phenology models is difficult and there is no consensus on the best approach. We propose an original, detailed approach for the calibration of such models, which we refer to as a calibration protocol. The protocol covers all the steps in the calibration workflow, namely the choice of default parameter values, choice of objective function, choice of parameters to estimate from the data, calculation of optimal parameter values, and diagnostics. The major innovation is in the choice of which parameters to estimate from the data, which combines expert knowledge and data-based model selection. First, almost additive parameters are identified and estimated. This should make the bias (the average difference between observed and simulated values) nearly zero. These are "obligatory" parameters that will always be estimated. Then candidate parameters are identified: parameters likely to explain the remaining discrepancies between simulated and observed values. A candidate is only added to the list of parameters to estimate if it leads to a reduction in the BIC (Bayesian Information Criterion), a model selection criterion. A second original aspect of the protocol is the specification of documentation for each stage of the protocol. The protocol was applied by 19 modeling teams to three data sets for wheat phenology. All teams first calibrated their model using their "usual" calibration approach, so it was possible to compare usual and protocol calibration. Evaluation of prediction error was based on data from sites and years not represented in the training data. Compared to usual calibration, calibration following the new protocol reduced the variability between modeling teams by 22% and reduced prediction error by 11%.
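    The BIC-gated candidate step can be sketched on a hypothetical toy model; a straight line stands in for a crop phenology model, with the intercept playing the role of the obligatory (bias-removing) parameter and the slope the candidate parameter, accepted only if it lowers the BIC:

```python
import math
import random

def bic(rss, n, k):
    """BIC for a Gaussian-error least-squares fit: n observations,
    k estimated parameters, rss = residual sum of squares."""
    return n * math.log(rss / n) + k * math.log(n)

def fit_mean(y):
    # Obligatory parameter only: a constant offset, which zeroes the bias
    a = sum(y) / len(y)
    return [a] * len(y)

def fit_line(x, y):
    # Obligatory offset plus the candidate slope (ordinary least squares)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [a + b * xi for xi in x]

random.seed(0)
x = [i / 10 for i in range(30)]
y = [2.0 + 1.5 * xi + random.gauss(0, 0.1) for xi in x]  # synthetic "observations"

rss0 = sum((yi - fi) ** 2 for yi, fi in zip(y, fit_mean(y)))     # obligatory only
rss1 = sum((yi - fi) ** 2 for yi, fi in zip(y, fit_line(x, y)))  # + candidate
accept = bic(rss1, len(y), 2) < bic(rss0, len(y), 1)
```

    The BIC comparison penalizes the extra parameter by log(n), so a candidate is only kept when the fit improvement outweighs the added model complexity, which is the guard against overfitting that the protocol relies on.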